Learning Generative Models with the Up-Propagation Algorithm

Authors

  • Jong-Hoon Oh
  • H. Sebastian Seung
Abstract

Up-propagation is an algorithm for inverting and learning neural network generative models. Sensory input is processed by inverting a model that generates patterns from hidden variables using top-down connections. The inversion process is iterative, utilizing a negative feedback loop that depends on an error signal propagated by bottom-up connections. The error signal is also used to learn the generative model from examples. The algorithm is benchmarked against principal component analysis in experiments on images of handwritten digits.

In his doctrine of unconscious inference, Helmholtz argued that perceptions are formed by the interaction of bottom-up sensory data with top-down expectations. According to one interpretation of this doctrine, perception is a procedure of sequential hypothesis testing. We propose a new algorithm, called up-propagation, that realizes this interpretation in layered neural networks. It uses top-down connections to generate hypotheses and bottom-up connections to revise them.

It is important to understand the difference between up-propagation and its ancestor, the backpropagation algorithm. Backpropagation is a learning algorithm for recognition models. As shown in Figure 1a, bottom-up connections recognize patterns, while top-down connections propagate an error signal that is used to learn the recognition model. In contrast, up-propagation is an algorithm for inverting and learning generative models, as shown in Figure 1b. Top-down connections generate patterns from a set of hidden variables. Sensory input is processed by inverting the generative model, recovering hidden variables that could have generated the sensory data. This operation is called either pattern recognition or pattern analysis, depending on the meaning of the hidden variables. Inversion of the generative model is done iteratively, through a negative feedback loop driven by an error signal from the bottom-up connections. The error signal is also used for learning the connections.
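The inversion-then-learning scheme described above can be sketched in a few lines of numpy for a one-layer generative model `x_hat = f(W h)`. This is a minimal illustrative sketch, not the paper's implementation: the variable names (`W`, `h`, `eta_h`, `eta_w`), the `tanh` nonlinearity, and the step sizes are all assumptions chosen for readability.

```python
import numpy as np

# Minimal sketch of up-propagation for a one-layer generative model
# x_hat = f(W h). Names and step sizes are illustrative assumptions,
# not taken from the paper.

rng = np.random.default_rng(0)
n_visible, n_hidden = 16, 4

W = rng.normal(scale=0.5, size=(n_visible, n_hidden))  # top-down weights

f = np.tanh                                   # generative nonlinearity
f_prime = lambda u: 1.0 - np.tanh(u) ** 2     # its derivative

# A sensory pattern the model can represent, made from hidden "causes".
h_true = rng.normal(size=n_hidden)
x = f(W @ h_true)

# Inversion: start from a blank hypothesis and revise it iteratively.
h = np.zeros(n_hidden)
eta_h = 0.1
for _ in range(500):
    u = W @ h
    e = x - f(u)                              # error signal
    # Bottom-up connections (W^T) propagate the error to revise h:
    # this is the negative feedback loop of the inversion process.
    h += eta_h * W.T @ (f_prime(u) * e)

# Learning: the same error signal gives a gradient step on the weights.
eta_w = 0.01
u = W @ h
e = x - f(u)
W += eta_w * np.outer(f_prime(u) * e, h)
```

Note that the hidden-variable update and the weight update are driven by the same error signal `e`; only the direction of propagation (through `W.T` versus onto `W`) differs, which is the symmetry the abstract emphasizes.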


Similar Articles

Learning to Adapt by Minimizing Discrepancy

We explore whether useful temporal neural generative models can be learned from sequential data without back-propagation through time. We investigate the viability of a more neurocognitively-grounded approach in the context of unsupervised generative modeling of sequences. Specifically, we build on the concept of predictive coding, which has gained influence in cognitive science (Rauss and Pour...

Full text

Learning Generative ConvNet with Continuous Latent Factors by Alternating Back-Propagation

The supervised learning of the discriminative convolutional neural network (ConvNet or CNN) is powered by back-propagation on the parameters. In this paper, we show that the unsupervised learning of a popular top-down generative ConvNet model with latent continuous factors can be accomplished by a learning algorithm that consists of alternately performing back-propagation on both the latent f...

Full text

Expectation-Propagation for the Generative Aspect Model

The generative aspect model is an extension of the multinomial model for text that allows word probabilities to vary stochastically across documents. Previous results with aspect models have been promising, but hindered by the computational difficulty of carrying out inference and learning. This paper demonstrates that the simple variational methods of Blei et al. (2001) can lead to inac...

Full text

Dynamic Obstacle Avoidance by Distributed Algorithm based on Reinforcement Learning (RESEARCH NOTE)

In this paper we focus on the application of reinforcement learning to obstacle avoidance in dynamic environments in wireless sensor networks. A distributed algorithm based on reinforcement learning is developed for sensor networks to guide a mobile robot through dynamic obstacles. The sensor network models the danger of the area under coverage as obstacles, and has the property of adoption o...

Full text

Stochastic Back-propagation and Variational Inference in Deep Latent Gaussian Models

We marry ideas from deep neural networks and approximate Bayesian inference to derive a generalised class of deep, directed generative models, endowed with a new algorithm for scalable inference and learning. Our algorithm introduces a recognition model to represent approximate posterior distributions, which acts as a stochastic encoder of the data. We develop stochastic backpropagation – ru...

Full text


Journal:

Volume   Issue 

Pages  -

Publication date: 1997